
    Keep Ballots Secret: On the Futility of Social Learning in Decision Making by Voting

    We show that social learning is not useful in a model of team binary decision making by voting, where each vote carries equal weight. Specifically, we consider Bayesian binary hypothesis testing where agents have any conditionally independent observation distribution and their local decisions are fused by any L-out-of-N fusion rule. The agents make local decisions sequentially, each allowed to use its own private signal and all precedent local decisions. Though social learning generally occurs, in that precedent local decisions affect an agent's belief, optimal team performance is obtained when all precedent local decisions are ignored. Thus, social learning is futile, and secret ballots are optimal. This contrasts with typical studies of social learning because we include a fusion center rather than concentrating on the performance of the latest-acting agents.
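    The secret-ballot setting above lends itself to a quick numeric sanity check. Below is a minimal simulation sketch, assuming Gaussian private signals with means ±1 and unit variance, equal priors, and a 3-out-of-5 majority fusion rule; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, trials = 5, 3, 100_000      # L-out-of-N fusion: decide H1 iff >= L votes

# True hypothesis per trial, equally likely; private signals are N(+1, 1)
# under H1 and N(-1, 1) under H0 (an illustrative model).
H = rng.random(trials) < 0.5
signals = np.where(H[:, None], 1.0, -1.0) + rng.standard_normal((trials, N))

# Secret ballot: each agent thresholds only its own signal (the MAP rule
# for equal priors), ignoring all earlier votes.
votes = signals > 0
team = votes.sum(axis=1) >= L
error = np.mean(team != H)
print(f"team error rate: {error:.4f}")
```

    With these numbers the fused majority errs roughly 3% of the time, versus roughly 16% for a lone agent, without any vote sharing.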

    Distributed Hypothesis Testing with Social Learning and Symmetric Fusion

    We study the utility of social learning in a distributed detection model with agents sharing the same goal: a collective decision that optimizes an agreed-upon criterion. We show that social learning is helpful in some cases but provably futile (and thus essentially a distraction) in others. Specifically, we consider Bayesian binary hypothesis testing performed by a distributed detection and fusion system, where all decision-making agents have binary votes that carry equal weight. Decision-making agents in the team sequentially make local decisions based on their own private signals and all precedent local decisions. It is shown that the optimal decision rule is not affected by precedent local decisions when all agents observe conditionally independent and identically distributed private signals; perfect Bayesian reasoning cancels out all effects of social learning. When the agents observe private signals with different signal-to-noise ratios, social learning is again futile if the team decision requires unanimity. Otherwise, social learning can strictly improve team performance. Furthermore, the order in which agents make their decisions affects the team decision. Comment: 10 pages, 7 figures.
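    The belief update through which social learning operates can be sketched as a one-line Bayesian posterior calculation. The vote-accuracy parameter q (the probability that a public vote matches the true hypothesis) is a modeling assumption introduced here for illustration:

```python
def update_belief(p1, vote, q):
    """Posterior P(H1) after observing a public binary vote, assuming the
    vote matches the true hypothesis with probability q."""
    like1 = q if vote == 1 else 1 - q      # P(vote | H1)
    like0 = 1 - q if vote == 1 else q      # P(vote | H0)
    return p1 * like1 / (p1 * like1 + (1 - p1) * like0)

# A vote for H1 from an 80%-accurate voter moves a flat prior to 0.8.
print(update_belief(0.5, 1, 0.8))
```

    The paper's point is that, under i.i.d. signals and symmetric fusion, acting on this shifted belief does not change the optimal local decision rule.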

    Team decision making with social learning: human subject experiments

    We demonstrate that human decision-making agents engage in social learning whether or not it is beneficial. Specifically, we consider binary Bayesian hypothesis testing with multiple agents voting sequentially for a team decision, where each agent observes earlier-acting agents' votes as well as a conditionally independent and identically distributed private signal. While the best strategy (for the team objective) is to ignore the votes of earlier-acting agents, human agents instead tend to be affected by others' decisions. Furthermore, they are almost equally affected in the team setting as when they are incentivized only for individual correctness. These results suggest that votes of earlier-acting agents should be withheld (not shared as public signals) to improve team decision-making performance; humans are insufficiently rational to innately apply the optimal decision rules that would ignore the public signals. Accepted manuscript.
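    The gap between herding and vote-ignoring behavior can be illustrated with a small simulation. The "naive herding" rule below, which folds earlier votes into the prior as if they were independent signals of fixed accuracy, is a stylized stand-in for human behavior, not the model fitted in the experiments; the Gaussian signal model and majority fusion are likewise assumptions:

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(1)
N, trials = 5, 200_000
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
q = Phi(1.0)                                    # isolated-vote accuracy, ~0.84

H = rng.integers(0, 2, trials) * 2 - 1          # hypothesis encoded as ±1
x = H[:, None] + rng.standard_normal((trials, N))

# Secret ballots: each agent votes on its private signal alone.
secret_votes = (x > 0).astype(int)
secret_team = (secret_votes.sum(axis=1) >= 3) * 2 - 1
secret_err = np.mean(secret_team != H)

# Naive herding: each agent adds the public log-likelihood of earlier
# votes (treated as independent, accuracy-q signals) to its private LLR.
herd_votes = np.zeros((trials, N), dtype=int)
llr_pub = np.zeros(trials)
for i in range(N):
    herd_votes[:, i] = (2 * x[:, i] + llr_pub > 0).astype(int)
    llr_pub += np.where(herd_votes[:, i] == 1,
                        log(q / (1 - q)), log((1 - q) / q))
herd_team = (herd_votes.sum(axis=1) >= 3) * 2 - 1
herd_err = np.mean(herd_team != H)

print(f"secret ballots: {secret_err:.4f}, naive herding: {herd_err:.4f}")
```

    Because herded votes are correlated, the majority fusion loses diversity and the team error rises, mirroring why withholding earlier votes helps.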

    Quantization of prior probabilities in Bayesian group decision-making

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 85-87). In Bayesian hypothesis testing, a decision is made based on a prior probability distribution over the hypotheses, an observation with a known conditional distribution given the true hypothesis, and an assignment of costs to different types of errors. In a setting with multiple agents and the principle of "one person, one vote", the decisions of agents are typically combined by the majority rule. This thesis considers collections of group hypothesis testing problems over which the prior itself varies. Motivated by constraints on the memory or computational resources of the agents, quantization of the prior probabilities is introduced, leading to novel analysis and design problems. Two hypotheses and three agents are sufficient to reveal various intricacies of the setting. This could arise with a team of three referees deciding by majority rule whether a foul was committed. The referees face a collection of problems with different prior probabilities, varying by player. This scenario illustrates that even as all referees share the goal of making correct foul calls, opinions on the relative importance of missed detections and false alarms can vary. Whether cost functions are identical and whether referees use identical quantizers create variants of the problem. When referees are identical in both their cost functions and their quantizers for the prior probabilities, it is optimal for the referees to use the same decision rules. The homogeneity of the referees simplifies the problem to an equivalent single-referee problem with a lower-variance effective noise. The quantizer optimization problem is then reduced to a problem previously solved by Varshney and Varshney (2008). Centroid and nearest-neighbor conditions that are necessary for quantizer optimality are provided.
On the contrary, the problem becomes complicated when variations in cost functions or quantizers are allowed. In this case, the decision-making and quantization problems create strategic-form games; the decision-making game always has a Nash equilibrium. The analysis shows that conflict between referees, in the form of variation in cost functions, makes overall team performance worse. Two ways to optimize quantizers are introduced and compared. In the setting where referees purely collaborate, in the form of having equal cost functions, the effect of variations between their quantizers is analyzed. It is shown that the referees have an incentive to use different quantizers rather than identical quantizers even though their cost functions are identical. In conclusion, a diverse team with a common goal performs best. by Joong Bum Rhim. S.M.
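    The effect of quantizing the prior can be sketched for a single referee facing a binary Gaussian test. The uniform midpoint quantizer and the signal model below (means ±1, unit variance, equal error costs) are illustrative assumptions; the thesis designs the quantizer via centroid and nearest-neighbor conditions instead:

```python
from math import erf, log, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def bayes_risk(p1, p1_hat):
    """Risk at true prior p1 when the threshold is designed for p1_hat."""
    t = 0.5 * log((1 - p1_hat) / p1_hat)   # LRT threshold for N(±1, 1)
    p_fa = 1 - Phi(t + 1)                  # decide H1 under H0
    p_md = Phi(t - 1)                      # decide H0 under H1
    return (1 - p1) * p_fa + p1 * p_md

def mean_bre(K, grid=2000):
    """Mean Bayes risk error of a K-cell uniform midpoint quantizer,
    with priors drawn uniformly on (0, 1)."""
    total = 0.0
    for i in range(grid):
        p1 = (i + 0.5) / grid
        cell = min(int(p1 * K), K - 1)
        p1_hat = (cell + 0.5) / K          # midpoint reconstruction
        total += bayes_risk(p1, p1_hat) - bayes_risk(p1, p1)
    return total / grid
```

    Comparing `mean_bre(4)` with `mean_bre(8)` shows the mean Bayes risk error shrinking as cells are added; this is the distortion that the centroid and nearest-neighbor conditions are designed to minimize.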

    Beliefs and expertise in sequential decision making

    This work explores a sequential decision-making problem with agents having diverse expertise and mismatched beliefs. We consider an N-agent sequential binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on previous agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have varying expertise in terms of the noise variance in the private signal. We focus on the risk of the last-acting agent, where precedent agents are selfish. Thus, we call this advisor(s)-advisee sequential decision making. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The impact of diverse noise levels (that is, diverse expertise levels) in the two-agent case is also considered, and the analytical properties of the optimal belief curves are given. These curves, for certain cases, resemble probability weighting functions from cumulative prospect theory, and so we also discuss the choice of Prelec weighting functions as an approximation for the optimal beliefs, and the possible psychophysical optimality of human beliefs. Next, we consider an advisor selection problem wherein an advisee with a certain belief chooses an advisor from a set of candidates with varying beliefs. We characterize the decision region for choosing such an advisor and argue that an advisee with beliefs deviating from the true prior often ends up selecting a suboptimal advisor, indicating the need for a social planner. We close with a discussion of the implications of the study toward designing artificial intelligence systems for augmenting human intelligence. https://arxiv.org/abs/1812.04419 First author draft.
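    The two-agent advisor-advisee setting can be sketched numerically: the advisor votes selfishly using a (possibly mismatched) belief b, and the advisee performs an exact Bayesian update of the true prior from that vote plus its own Gaussian signal. The signal model (means ±1, unit variance, equal costs) and the true prior of 0.2 are illustrative assumptions, not values from the paper:

```python
from math import erf, log, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def advisee_risk(b, p1=0.2):
    """Bayes risk of the last agent when the advisor acts on belief b."""
    t1 = 0.5 * log((1 - b) / b)                  # advisor's threshold from b
    pv1 = {1: 1 - Phi(t1 - 1), 0: Phi(t1 - 1)}   # P(vote | H1)
    pv0 = {1: 1 - Phi(t1 + 1), 0: Phi(t1 + 1)}   # P(vote | H0)
    risk = 0.0
    for v in (0, 1):
        # advisee decides H1 iff 2x + c_v > 0, i.e. x > -c_v / 2
        c_v = log(p1 / (1 - p1)) + log(pv1[v] / pv0[v])
        thr = -c_v / 2
        p_fa = 1 - Phi(thr + 1)
        p_md = Phi(thr - 1)
        risk += (1 - p1) * pv0[v] * p_fa + p1 * pv1[v] * p_md
    return risk

beliefs = [i / 100 for i in range(5, 96)]
risks = [advisee_risk(b) for b in beliefs]
b_opt = beliefs[risks.index(min(risks))]
print(f"risk-minimizing advisor belief: {b_opt:.2f} (true prior 0.20)")
```

    In this sketch the risk-minimizing advisor belief exceeds the true prior of the unlikely hypothesis, illustrating the counterintuitive result that a mismatched belief can be optimal.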

    Social teaching: being informative vs. being right in sequential decision making

    We consider sequential Bayesian binary hypothesis testing where each individual agent makes a binary decision motivated only by minimization of her own perception of the Bayes risk. The information available to each agent is an initial belief, a private signal, and the decisions of all earlier-acting agents; it follows that each agent should apply a standard Bayesian update of her belief as in social learning. The effect of the set of initial beliefs on the decision-making performance of the last agent is studied. In general, the optimal initial beliefs are not equal to the actual prior probability. When the private signals are described by Gaussian likelihoods, they also are not haphazard, but rather follow a systematic pattern: the earlier-acting agents should act as if the prior probability is larger than it is in reality when the true prior probability is small, and vice versa. We interpret this as being open-minded toward the unlikely hypothesis. Such open-mindedness increases but does not maximize the mutual information between the true hypothesis and a decision. First author draft.

    Quantization of Prior Probabilities for Collaborative Distributed Hypothesis Testing

    This paper studies the quantization of prior probabilities, drawn from an ensemble, for distributed detection and data fusion. Design and performance equivalences between a team of N agents tied by a fixed fusion rule and a more powerful single agent are obtained. The effects of identical quantization and diverse quantization are compared. Consideration of perceived common risk enables agents using diverse quantizers to collaborate in hypothesis testing, and it is proven that the minimum mean Bayes risk error is achieved by diverse quantization. The comparison shows that optimal diverse quantization with K cells per quantizer performs as well as optimal identical quantization with N(K-1)+1 cells per quantizer. Similar results are obtained for maximum Bayes risk error as the distortion criterion. Comment: 11 pages.
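    The N(K-1)+1 cell count has a simple geometric reading: overlaying N quantizers whose K-1 interior boundaries are mutually staggered partitions (0, 1) into N(K-1)+1 team cells. Below is a counting sketch with an illustrative staggered uniform design (the paper optimizes the boundary placement rather than assuming it):

```python
N, K = 3, 4                         # 3 agents, 4 cells per quantizer

boundaries = set()
for n in range(N):                  # quantizer n is shifted by n/N of a step
    for j in range(1, K):           # its K-1 interior boundaries in (0, 1)
        boundaries.add(round((j + n / N) / K, 12))

team_cells = len(boundaries) + 1    # distinct cells of the overlaid partition
print(team_cells)                   # N*(K-1) + 1 = 10
```

    Identical quantization would need 10 cells per agent to produce the same overlaid partition that these three 4-cell quantizers achieve jointly.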

    Beliefs in Decision-Making Cascades

    This work explores a social learning problem with agents having nonidentical noise variances and mismatched beliefs. We consider an N-agent binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on preceding agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have nonidentical noise variances in the private signal. We focus on the Bayes risk of the last agent, where preceding agents are selfish. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The effect of nonidentical noise levels in the two-agent case is also considered, and analytical properties of the optimal belief curves are given. Next, we consider a predecessor selection problem wherein the subsequent agent of a certain belief chooses a predecessor from a set of candidates with varying beliefs. We characterize the decision region for choosing such a predecessor and argue that a subsequent agent with beliefs deviating from the true prior often ends up selecting a suboptimal predecessor, indicating the need for a social planner. Lastly, we discuss an augmented intelligence design problem that uses a model of human behavior from cumulative prospect theory and investigate its near-optimality and suboptimality. Comment: final version, to appear in IEEE Transactions on Signal Processing.
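    The cumulative-prospect-theory ingredient mentioned above is the Prelec probability weighting function; the parameter values in this sketch are illustrative, not fitted to the paper's model:

```python
from math import exp, log

def prelec(p, alpha=0.65, beta=1.0):
    """Prelec weighting w(p) = exp(-beta * (-ln p)^alpha) for p in (0, 1).
    For alpha < 1 and beta = 1 it overweights probabilities below 1/e and
    underweights those above, the same open-mindedness pattern toward the
    unlikely hypothesis that the optimal belief curves exhibit."""
    return exp(-beta * (-log(p)) ** alpha)

print(prelec(0.01), prelec(0.9))   # small p is inflated, large p deflated
```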

    Aggregation and influence in teams of imperfect decision makers

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 137-141). Bayesian hypothesis testing inevitably requires prior probabilities of hypotheses. Motivated by human decision makers, this thesis studies how binary decision making is performed when the decision-making agents use imperfect prior probabilities. Three detection models with multiple agents are investigated: distributed detection with symmetric fusion, sequential detection with social learning, and distributed detection with both symmetric fusion and social learning. In distributed detection with symmetric fusion, we consider the agents to be a team aiming to minimize the Bayes risk of the team's decision. In this model, incorrect beliefs reduce the chance of the agents being right and so always increase the Bayes risk of the decision-making team. In contrast, the role of beliefs is more complicated in the sequential detection model with social learning, where agents observe public signals, which are decisions made by other agents. Since each agent affects the minimum possible Bayes risk for subsequent agents, she may have a mixed objective including her own Bayes risk and the Bayes risks of subsequent agents. For an earlier-acting agent, it is shown that being informative to later-acting agents is different from being right. When private signals are described by Gaussian likelihoods, informative earlier-acting agents should be open-minded toward the unlikely hypothesis. Social learning helps imperfect agents who have favorable incorrect beliefs outperform perfect agents who have correct beliefs. Social learning is less influential in the distributed detection model with symmetric fusion than in the sequential detection model.
This is because social learning induces the evolution of the fusion rule in the distributed detection model, which countervails the other effect of social learning, namely belief update. In particular, social learning is futile when the agents observe conditionally independent and identically distributed private signals or when the agents require unanimity to make a decision. Since social learning is ineffective, imperfect agents cannot outperform perfect agents, unlike in the sequential detection model. Experiments on human behavior were performed in team decision-making situations in which people should optimally ignore public signals. The experiments suggest that when people vote with equal qualities of information, the ballots should be secret. by Joong Bum Rhim. Ph.D.
